
    A survey on signature-based Gr\"obner basis computations

    This paper is a survey of the area of signature-based Gr\"obner basis algorithms, which was initiated by Faug\`ere's F5 algorithm in 2002. We explain the general ideas behind the use of signatures. We show how to classify the various known variants by three different orderings. For this we give translations between different notations and show that, beyond notation, many approaches are essentially the same. Moreover, we give a general description of how the idea of signatures arises quite naturally when the reduction process is performed using linear algebra. This survey shall help to outline this field of active research. Comment: 53 pages, 8 figures, 11 tables
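
    The survey's observation that signatures fit naturally into linear-algebra-style reduction can be tried out directly: SymPy ships both a Buchberger-style engine and a signature-based one ("F5B"). The sketch below is only a minimal illustration on a made-up system, assuming SymPy's groebner function and its method option; it is not the algorithms classified in the survey.

        # Minimal sketch: compare the default Buchberger engine with SymPy's
        # signature-based F5B engine on a small, arbitrary example system.
        import sympy as sp

        x, y, z = sp.symbols('x y z')
        F = [x**2 + y*z - 2, x*z - y**2 + 1, x*y + z**2 - 1]   # illustrative system

        G_buch = sp.groebner(F, x, y, z, order='grevlex', method='buchberger')
        G_sig = sp.groebner(F, x, y, z, order='grevlex', method='f5b')

        # Both engines return the same reduced Groebner basis; signatures only
        # change how many useless (zero) reductions happen along the way.
        print(G_buch.exprs)
        print(G_sig.exprs)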

    On formulas for decoding binary cyclic codes

    We address the problem of the algebraic decoding of any cyclic code up to the true minimum distance. For this, we use the classical formulation of the problem, which is to find the error locator polynomial in terms of the syndromes of the received word. This is usually done with the Berlekamp-Massey algorithm in the case of BCH codes and related codes, but for the general case, there is no generic algorithm to decode cyclic codes. Even in the case of the quadratic residue codes, which are good codes with a very strong algebraic structure, there is no available general decoding algorithm. For this particular case of quadratic residue codes, several authors have worked out, by hand, formulas for the coefficients of the locator polynomial in terms of the syndromes, using the Newton identities. This work has to be done for each particular quadratic residue code, and becomes more and more difficult as the length grows. Furthermore, it is error-prone. We propose to automate these computations, using elimination theory and Gr\"obner bases. We prove that, by computing appropriate Gr\"obner bases, one automatically recovers formulas for the coefficients of the locator polynomial, in terms of the syndromes.
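
    The elimination idea can be illustrated on a toy instance over the rationals (real decoding takes place over the code's finite field, where the identities look different in characteristic 2). In the hypothetical sketch below, two error locations Z1, Z2 define power-sum syndromes S1..S4 and locator coefficients s1, s2; a lex Gr\"obner basis that eliminates Z1, Z2 yields the kind of formulas for s1, s2 in terms of the syndromes that the paper automates.

        # Toy illustration of recovering locator-coefficient formulas by elimination.
        import sympy as sp

        Z1, Z2, S1, S2, S3, S4, s1, s2 = sp.symbols('Z1 Z2 S1 S2 S3 S4 s1 s2')

        eqs = [
            S1 - (Z1 + Z2),            # power-sum syndromes of the error locations
            S2 - (Z1**2 + Z2**2),
            S3 - (Z1**3 + Z2**3),
            S4 - (Z1**4 + Z2**4),
            s1 - (Z1 + Z2),            # elementary symmetric functions: coefficients
            s2 - Z1*Z2,                # of the locator polynomial x^2 - s1*x + s2
        ]

        # Lex order with Z1, Z2 first acts as an elimination order; basis elements
        # free of Z1, Z2 are relations between the syndromes and s1, s2.
        G = sp.groebner(eqs, Z1, Z2, S1, S2, S3, S4, s1, s2, order='lex')
        for g in G.exprs:
            if not g.has(Z1, Z2):
                print(g)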

    On the Complexity of the F5 Gr\"obner basis Algorithm

    We study the complexity of Gr\"obner bases computation, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. We give a bound on the number of polynomials of degree $d$ in a Gr\"obner basis computed by Faug\`ere's $F_5$ algorithm (Fau02) in this generic case for the grevlex ordering (which is also a bound on the number of polynomials for a reduced Gr\"obner basis, independently of the algorithm used). Next, we analyse more precisely the structure of the polynomials in the Gr\"obner bases with signatures that $F_5$ computes and use it to bound the complexity of the algorithm. Our estimates show that the version of $F_5$ we analyse, which uses only standard Gaussian elimination techniques, outperforms row reduction of the Macaulay matrix with the best known algorithms for moderate degrees, and even for degrees up to the thousands if Strassen's multiplication is used. The degree being fixed, the factor of improvement grows exponentially with the number of variables. Comment: 24 pages
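
    For a sense of scale, the classical ingredients that the paper sharpens can be computed directly: for a regular sequence of n generic polynomials of degree d in n variables, the grevlex computation works with Macaulay-type matrices whose columns are the monomials of degree at most the Macaulay bound n(d-1)+1. The snippet below evaluates these standard textbook quantities for an arbitrary choice of n and d; it does not reproduce the paper's refined estimates.

        # Standard textbook bounds only (not the paper's sharper analysis):
        # Macaulay bound and a crude cost estimate for dense row reduction.
        from math import comb

        n, d = 8, 3                      # illustrative: 8 generic cubics in 8 variables
        d_reg = n * (d - 1) + 1          # Macaulay bound for a regular sequence
        n_cols = comb(n + d_reg, n)      # monomials of degree <= d_reg in n variables

        for omega, name in [(3.0, "classical"), (2.807, "Strassen")]:
            print(f"{name}: about {n_cols ** omega:.2e} field operations (omega = {omega})")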

    Moment Varieties of Gaussian Mixtures

    The points of a moment variety are the vectors of all moments up to some order of a family of probability distributions. We study this variety for mixtures of Gaussians. Following up on Pearson's classical work from 1894, we apply current tools from computational algebra to recover the parameters from the moments. Our moment varieties extend objects familiar to algebraic geometers. For instance, the secant varieties of Veronese varieties are the loci obtained by setting all covariance matrices to zero. We compute the ideals of the 5-dimensional moment varieties representing mixtures of two univariate Gaussians, and we offer a comparison to the maximum likelihood approach. Comment: 17 pages, 2 figures
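
    Pearson's setting can be written down symbolically. The sketch below (an illustration with made-up symbol names, not the paper's computation) builds the raw moments up to order 5 of a mixture of two univariate Gaussians from the moment generating function; the closure of the image of this map is the 5-dimensional moment variety discussed above, and recovering the parameters from numerical moments amounts to solving these polynomial equations.

        # Moments up to order 5 of a two-component univariate Gaussian mixture.
        import sympy as sp

        a, mu1, mu2, s1, s2, t = sp.symbols('a mu1 mu2 s1 s2 t')

        def gaussian_raw_moments(mu, s, order):
            """Raw moments E[X^k] of N(mu, s), read off the moment generating function."""
            mgf = sp.exp(mu * t + s * t**2 / 2)
            return [sp.diff(mgf, t, k).subs(t, 0).expand() for k in range(1, order + 1)]

        M1 = gaussian_raw_moments(mu1, s1, 5)
        M2 = gaussian_raw_moments(mu2, s2, 5)
        mixture_moments = [sp.expand(a * p + (1 - a) * q) for p, q in zip(M1, M2)]

        for k, m in enumerate(mixture_moments, start=1):
            print(f"m{k} =", m)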

    Polynomial-Time Algorithms for Quadratic Isomorphism of Polynomials: The Regular Case

    Let $\mathbf{f}=(f_1,\ldots,f_m)$ and $\mathbf{g}=(g_1,\ldots,g_m)$ be two sets of $m\geq 1$ nonlinear polynomials over $\mathbb{K}[x_1,\ldots,x_n]$ ($\mathbb{K}$ being a field). We consider the computational problem of finding -- if any -- an invertible transformation on the variables mapping $\mathbf{f}$ to $\mathbf{g}$. The corresponding equivalence problem is known as {\tt Isomorphism of Polynomials with one Secret} ({\tt IP1S}) and is a fundamental problem in multivariate cryptography. The main result is a randomized polynomial-time algorithm for solving {\tt IP1S} for quadratic instances, a particular case of importance in cryptography and somewhat justifying {\it a posteriori} the fact that {\it Graph Isomorphism} reduces to only cubic instances of {\tt IP1S} (Agrawal and Saxena). To this end, we show that {\tt IP1S} for quadratic polynomials can be reduced to a variant of the classical module isomorphism problem in representation theory, which involves testing the orthogonal simultaneous conjugacy of symmetric matrices. We show that we can essentially {\it linearize} the problem by reducing quadratic-{\tt IP1S} to testing the orthogonal simultaneous similarity of symmetric matrices; this latter problem was shown by Chistov, Ivanyos and Karpinski to be equivalent to finding an invertible matrix in the linear space $\mathbb{K}^{n \times n}$ of $n \times n$ matrices over $\mathbb{K}$ and to computing the square root in a matrix algebra. While computing square roots of matrices can be done efficiently using numerical methods, it seems difficult to control the bit complexity of such methods. However, we present exact and polynomial-time algorithms for computing the square root in $\mathbb{K}^{n \times n}$ for various fields (including finite fields). We then consider \#{\tt IP1S}, the counting version of {\tt IP1S} for quadratic instances. In particular, we provide a (complete) characterization of the automorphism group of homogeneous quadratic polynomials. Finally, we also consider the more general {\it Isomorphism of Polynomials} ({\tt IP}) problem where we allow an invertible linear transformation on the variables \emph{and} on the set of polynomials. A randomized polynomial-time algorithm for solving {\tt IP} when $\mathbf{f}=(x_1^d,\ldots,x_n^d)$ is presented. From an algorithmic point of view, the problem boils down to factoring the determinant of a linear matrix (\emph{i.e.}\ a matrix whose components are linear polynomials). This extends to {\tt IP} a result of Kayal obtained for {\tt PolyProj}. Comment: Published in Journal of Complexity, Elsevier, 2015, pp.3
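
    The "linearization" of quadratic instances rests on a classical fact that is easy to check in code: a homogeneous quadratic polynomial f corresponds to the symmetric matrix A = Hess(f)/2, and the substitution x -> S x turns A into S^T A S, so equivalence questions about quadratic forms become congruence/conjugacy questions about symmetric matrices. The sketch below only verifies this correspondence on an arbitrary example; the polynomial and the matrix S are illustrative, not the paper's algorithm.

        # Check that the symmetric matrix of f(Sx) is S^T * A * S, where A is the
        # symmetric matrix of f (quadratic IP1S thus becomes a matrix problem).
        import sympy as sp

        x1, x2, x3 = sp.symbols('x1 x2 x3')
        X = sp.Matrix([x1, x2, x3])

        f = x1**2 + 4*x1*x2 + 3*x2*x3 + 2*x3**2           # illustrative quadratic form
        A = sp.hessian(f, (x1, x2, x3)) / 2

        S = sp.Matrix([[1, 1, 0], [0, 1, 2], [1, 0, 1]])  # some invertible substitution
        g = sp.expand(f.subs(dict(zip((x1, x2, x3), S * X)), simultaneous=True))
        B = sp.hessian(g, (x1, x2, x3)) / 2

        print(B - S.T * A * S)                            # the zero matrix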

    Gr\"obner Bases of Bihomogeneous Ideals generated by Polynomials of Bidegree (1,1): Algorithms and Complexity

    Solving multihomogeneous systems, a wide class of structured algebraic systems occurring frequently in practical problems, is of prime importance. Experimentally, solving these systems with Gr\"obner basis algorithms seems to be easier than solving homogeneous systems of the same degree. Nevertheless, the reasons for this behaviour are not clear. In this paper, we focus on bilinear systems (i.e. bihomogeneous systems where all equations have bidegree (1,1)). Our goal is to provide a theoretical explanation of the aforementioned experimental behaviour and to propose new techniques to speed up Gr\"obner basis computations by using the multihomogeneous structure of those systems. The contributions are theoretical and practical. First, we adapt the classical F5 criterion to avoid reductions to zero which occur when the input is a set of bilinear polynomials. We also prove an explicit form of the Hilbert series of bihomogeneous ideals generated by generic bilinear polynomials and give a new upper bound on the degree of regularity of generic affine bilinear systems. This leads to new complexity bounds for solving bilinear systems. We also propose a variant of the F5 algorithm dedicated to multihomogeneous systems, which exploits a structural property of the Macaulay matrix arising on such inputs. Experimental results show that this variant requires less time and memory than the classical homogeneous F5 algorithm. Comment: 31 pages
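
    The inputs studied here are easy to generate and experiment with. The sketch below builds a random affine bilinear system in two blocks of variables and computes its grevlex Gr\"obner basis with SymPy; sizes, coefficients and the random seed are arbitrary illustrative choices, and the modest degrees observed are the behaviour the paper explains via Hilbert series of bilinear ideals.

        # Generate a random affine bilinear system (bidegree (1,1) plus affine terms)
        # and look at the largest total degree appearing in its Groebner basis.
        import random
        import sympy as sp

        random.seed(0)
        nx, ny = 2, 2
        xs = sp.symbols(f'x1:{nx + 1}')
        ys = sp.symbols(f'y1:{ny + 1}')

        def random_bilinear():
            """A random affine polynomial of bidegree (1,1) in the blocks xs and ys."""
            terms = [random.randint(-3, 3) * xi * yj for xi in xs for yj in ys]
            terms += [random.randint(-3, 3) * v for v in (*xs, *ys)]
            terms.append(random.randint(-3, 3))
            return sp.Add(*terms)

        F = [random_bilinear() for _ in range(nx + ny)]
        G = sp.groebner(F, *xs, *ys, order='grevlex')

        print(max(sp.Poly(g, *xs, *ys).total_degree() for g in G.exprs))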

    On the Complexity of the Generalized MinRank Problem

    We study the complexity of solving the \emph{generalized MinRank problem}, i.e. computing the set of points where the evaluation of a polynomial matrix has rank at most $r$. A natural algebraic representation of this problem gives rise to a \emph{determinantal ideal}: the ideal generated by all minors of size $r+1$ of the matrix. We give new complexity bounds for solving this problem using Gr\"obner bases algorithms under genericity assumptions on the input matrix. In particular, these complexity bounds allow us to identify families of generalized MinRank problems for which the arithmetic complexity of the solving process is polynomial in the number of solutions. We also provide an algorithm to compute a rational parametrization of the variety of a 0-dimensional and radical system of bi-degree $(D,1)$. We show that its complexity can be bounded by using the complexity bounds for the generalized MinRank problem. Comment: 29 pages
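
    The determinantal modelling is straightforward to set up in a computer algebra system. The sketch below builds the ideal of (r+1)-minors of a small polynomial matrix and computes a grevlex Gr\"obner basis of it with SymPy; the matrix, the rank bound r and the variables are illustrative choices only.

        # Ideal of (r+1)-minors: its variety is the locus where rank(M) <= r.
        from itertools import combinations
        import sympy as sp

        x, y = sp.symbols('x y')
        M = sp.Matrix([[x,     y,     1    ],
                       [y,     x + 1, x*y  ],
                       [x - 1, 2*y,   x + y]])   # illustrative polynomial matrix
        r = 1

        def minors(M, size):
            """All size x size minors of M."""
            return [M.extract(list(ri), list(ci)).det()
                    for ri in combinations(range(M.rows), size)
                    for ci in combinations(range(M.cols), size)]

        ideal_gens = [sp.expand(m) for m in minors(M, r + 1)]
        G = sp.groebner(ideal_gens, x, y, order='grevlex')
        print(G.exprs)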

    On the complexity of computing Gr\"obner bases for weighted homogeneous systems

    Solving polynomial systems arising from applications is frequently made easier by the structure of the systems. Weighted homogeneity (or quasi-homogeneity) is one example of such a structure: given a system of weights $W=(w_1,\dots,w_n)$, $W$-homogeneous polynomials are polynomials which are homogeneous w.r.t. the weighted degree $\deg_W(X_1^{\alpha_1},\dots,X_n^{\alpha_n}) = \sum w_i\alpha_i$. Gr\"obner bases for weighted homogeneous systems can be computed by adapting existing algorithms for homogeneous systems to the weighted homogeneous case. We show that in this case, the complexity estimate for Algorithm F5, $\binom{n+d_{\max}-1}{d_{\max}}^{\omega}$, can be divided by a factor $\left(\prod w_i\right)^{\omega}$. For zero-dimensional systems, the complexity of Algorithm FGLM, $nD^{\omega}$ (where $D$ is the number of solutions of the system), can be divided by the same factor $\left(\prod w_i\right)^{\omega}$. Under genericity assumptions, for zero-dimensional weighted homogeneous systems of $W$-degree $(d_1,\dots,d_n)$, these complexity estimates are polynomial in the weighted B\'ezout bound $\prod_{i=1}^{n}d_i / \prod_{i=1}^{n}w_i$. Furthermore, the maximum degree reached in a run of Algorithm F5 is bounded by the weighted Macaulay bound $\sum (d_i-w_i) + w_n$, and this bound is sharp if we can order the weights so that $w_n=1$. For overdetermined semi-regular systems, estimates from the homogeneous case can be adapted to the weighted case. We provide some experimental results based on systems arising from a cryptography problem and from polynomial inversion problems. They show that taking advantage of the weighted homogeneous structure yields substantial speed-ups, and allows us to solve systems which were otherwise out of reach.
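
    The bounds quoted above are simple closed formulas and can be evaluated for any concrete choice of weights and W-degrees. The snippet below does this for an arbitrary illustrative instance (weights ordered so that the last one is 1, where the weighted Macaulay bound is sharp); it only evaluates the formulas stated in the abstract.

        # Evaluate the weighted Bezout bound, the weighted Macaulay bound and the
        # (prod w_i)^omega savings factor for an illustrative choice of W and degrees.
        from math import prod

        W = [4, 2, 2, 1]          # system of weights, ordered so that w_n = 1
        D = [8, 6, 4, 4]          # W-degrees of the equations
        omega = 2.807             # feasible exponent of linear algebra

        bezout_w = prod(D) // prod(W)                            # prod d_i / prod w_i
        macaulay_w = sum(d - w for d, w in zip(D, W)) + W[-1]    # sum (d_i - w_i) + w_n
        savings = prod(W) ** omega                               # divides the F5/FGLM estimates

        print(bezout_w, macaulay_w, f"{savings:.1f}")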